243 research outputs found

    Nanoparticle formation and coating using supercritical fluids

    Jet breakup phenomenon and nanoparticle formation. The focus of this dissertation is to study the SAS process to manufacture nanoparticles with minimal agglomeration and controlled size and size distribution. Solution jet breakup and solvent evaporation into the supercritical fluid (SC CO2) are studied using a high-speed charge-coupled device (CCD) camera equipped with a double-shot particle image velocimetry (PIV) laser and a high-pressure view cell. Particle formation using SAS is studied: polyvinylpyrrolidone (PVP) particles are formed using micron-sized capillary nozzles and a combination of thermodynamically good and poor solvents in order to achieve nano-sized particles with reduced agglomeration and a narrow size distribution. The effects of operational parameters on the physicochemical properties of the particles are investigated. Since the proposed method is based on general thermodynamic properties of polymer-solvent systems, it should be applicable to a wide variety of polymers, for applications ranging from improving the flow and packing properties of powders to controlling particle interaction with the external surroundings in drug delivery systems. Fine particle coating and encapsulation using supercritical fluids. In certain applications, particle surfaces need to be modified or functionalized by coating them with another material to serve a specific purpose. As nanoparticles are extremely cohesive, it is very difficult to coat individual particles by traditional methods. In this research, nanoparticle coating is investigated through supercritical fluid-based methods. Agglomeration of particles is reduced by combining poor-solvent and ultrasonic techniques. The first technique uses a proprietary co-axial ultrasonic nozzle to spray the suspension into the SC CO2. Ultrasound is very effective in breaking agglomerates, and the introduction of the co-axial flow enables CO2 to serve not only as an antisolvent but also as a mixing enhancer.
The second technique uses a combination of thermodynamically good and poor solvents to tune the supersaturation of the polymer that serves as the coating material. Other methods, such as rapid expansion of supercritical solution (RESS) and particles from gas-saturated solution (PGSS), are also investigated and compared with SAS. Syneresis of silica gel. The effects of gravity, silica concentration in the gel, and time on syneresis are studied by exposing simulants of gel propellants to elevated gravities. Scanning electron microscopy (SEM) and nuclear magnetic resonance (NMR) are used to characterize the gel. Based on the results of the experimental studies, a multi-scale computational strategy for modeling gel formation and syneresis is proposed. Based on the analysis of the existing literature, directions for experimental and theoretical approaches to particle formation and coating are proposed, and these form the main parts of this thesis. This summary outlines the major components of the proposed research: first, important features of nanoparticle formation using SAS techniques are discussed, followed by nanocoatings and finally syneresis of silica gels

    Synthesis of nano/micro particles using supercritical method and particle characterization

    The thesis work consists of two parts: the jet behavior of solvents (ethanol and acetone), and particle formation using the SAS method. In the first part, the effects of process parameters such as temperature, pressure, injection velocity, and nozzle internal diameter on liquid jets are studied. The critical pressure at which the liquid jet of the solvent changes to a gas-like jet is investigated. In the second part, experiments are carried out to study particle formation using the SAS method. The effects of process parameters such as pressure, solution injection velocity, and nozzle inner diameter on particle properties are studied. A few modifications are made to the setup to make the experiments easier and more effective, and a few recommendations regarding process parameters are proposed for future experiments as well as for improving the setup

    Middleware specialization using aspect oriented programming

    Standardized middleware is used to build large distributed real-time and enterprise (DRE) systems. Such middleware is highly flexible and supports a large number of features, since it must be applicable to a wide range of domains and applications. This generality and flexibility, however, often causes performance and footprint overheads, particularly for product-line architectures, which have a well-defined scope smaller than that of the middleware yet must leverage its benefits, such as reusability. To alleviate this tension, a key objective is to specialize the middleware, which comprises removing the sources of excessive generality while simultaneously optimizing the required middleware functionality. To meet this objective, this paper describes how we have applied Aspect-Oriented Programming (AOP) in a novel manner to address these challenges. Although AOP is primarily used for separation of concerns, we use it to specialize middleware: aspects select the specific set of features needed by the product line, and aspect weaving is subsequently used to specialize the middleware. This paper describes the key motivation for our research, identifies the challenges of developing middleware-based product lines, and shows how to resolve them using aspects. Applying our AOP-based specialization techniques to event demultiplexing middleware showed a 3% decrease in latency and a 2% increase in throughput for the single-threaded implementation, and a 4% decrease in latency and a 3% increase in throughput for the thread-pool implementation
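
The feature-selection idea behind aspect weaving can be loosely illustrated in Python. This is only an analogy to the compile-time weaving the abstract describes (AOP toolchains such as AspectJ weave bytecode); the feature names and the `weave` decorator here are hypothetical:

```python
# Hypothetical sketch: a decorator "weaves" in only the handlers whose
# feature the product line has selected; unselected generality becomes
# a no-op stub, mimicking aspect-based middleware specialization.

SELECTED_FEATURES = {"single_threaded"}  # assumed product-line scope

def weave(feature):
    """Keep the wrapped handler only if its feature is selected."""
    def decorator(fn):
        if feature in SELECTED_FEATURES:
            return fn
        return lambda *args, **kwargs: None  # specialized out
    return decorator

@weave("thread_pool")
def dispatch_pooled(event):
    return f"pooled:{event}"

@weave("single_threaded")
def dispatch_single(event):
    return f"single:{event}"

print(dispatch_single("E1"))  # single:E1
print(dispatch_pooled("E1"))  # None: generality removed
```

Unlike true weaving, the unused code path still exists here, so this sketch captures the selection step but not the footprint reduction.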

    Towards a Performance Interference-aware Virtual Machine Placement Strategy for Supporting Soft Real-time Applications in the Cloud

    REACTION 2014. 3rd International Workshop on Real-time and Distributed Computing in Emerging Applications. Rome, Italy. December 2nd, 2014. It is standard practice for cloud service providers (CSPs) to overbook physical system resources to maximize resource utilization and make their business model more profitable. Resource overbooking, however, can lead to performance interference among the virtual machines (VMs) hosted on the physical resources, causing performance unpredictability for soft real-time applications hosted in the VMs, which is unacceptable to these applications. Balancing these conflicting requirements needs a careful design of the placement strategies for hosting soft real-time applications, such that performance interference effects are minimized while resource overbooking is still allowed. These placement decisions cannot be made offline because workloads change at run time; moreover, satisfying the priorities of collocated VMs may require VM migrations, which calls for an online solution. This paper presents a machine learning-based, online placement solution to this problem in which the system is trained using a publicly available trace of a large data center owned by Google. Our approach first classifies the VMs based on their historic mean CPU and memory usage and their performance features. Subsequently, it learns the best patterns for collocating the classified VMs by employing machine learning techniques. The extracted patterns are those that produce the lowest performance interference level on the specified host machines, making them amenable to hosting soft real-time applications while still allowing resource overbooking. This work was supported in part by the National Science Foundation CAREER CNS 0845789 and AFOSR DDDAS FA9550-13-1-0227
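
The classify-then-collocate pipeline can be sketched in a few lines. The thresholds and the toy interference model below are assumptions for illustration, not the paper's trained classifier or its learned patterns:

```python
# Illustrative sketch: classify VMs by historic mean CPU/memory usage,
# then score candidate collocations so the pair with the lowest
# estimated interference is placed together.

def classify(vm):
    cpu, mem = vm["cpu"], vm["mem"]  # historic means in [0, 1]
    if cpu > 0.7 or mem > 0.7:
        return "heavy"
    if cpu > 0.3 or mem > 0.3:
        return "medium"
    return "light"

def interference(a, b):
    # Toy model: contention appears once combined demand exceeds capacity.
    return (max(0.0, a["cpu"] + b["cpu"] - 1.0) +
            max(0.0, a["mem"] + b["mem"] - 1.0))

vms = [
    {"id": "vm1", "cpu": 0.8, "mem": 0.2},
    {"id": "vm2", "cpu": 0.2, "mem": 0.3},
    {"id": "vm3", "cpu": 0.6, "mem": 0.6},
]

# Best pair to collocate = lowest interference score.
pairs = [(interference(a, b), a["id"], b["id"])
         for i, a in enumerate(vms) for b in vms[i + 1:]]
best = min(pairs)
print(classify(vms[0]), best[1:])  # heavy ('vm1', 'vm2')
```

In the paper this scoring is replaced by patterns learned from the Google trace; the structure of the decision (classify, then pick the least-interfering collocation) is the same.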

    A Reinforcement Learning Approach for Performance-aware Reduction in Power Consumption of Data Center Compute Nodes

    As Exascale computing becomes a reality, the energy needs of compute nodes in cloud data centers will continue to grow. A common approach to reducing this energy demand is to limit the power consumption of hardware components when workloads are experiencing bottlenecks elsewhere in the system. However, designing a resource controller capable of detecting and limiting power consumption on the fly is a complex problem and can also adversely impact application performance. In this paper, we explore the use of Reinforcement Learning (RL) to design a power-capping policy on cloud compute nodes using observations of current power consumption and instantaneous application performance (heartbeats). By leveraging the Argo Node Resource Management (NRM) software stack in conjunction with the Intel Running Average Power Limit (RAPL) hardware control mechanism, we design an agent to control the maximum power supplied to the processors without compromising application performance. Employing a Proximal Policy Optimization (PPO) agent to learn an optimal policy on a mathematical model of the compute nodes, we demonstrate and evaluate, using the STREAM benchmark, how a trained agent running on actual hardware can take actions by balancing power consumption and application performance. Comment: This manuscript consists of a total of 10 pages with 8 figures and 3 tables and is awaiting its publication at IC2E-202
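
The control problem can be reduced to a minimal sketch: pick a power cap, observe performance and power, and learn the cap that maximizes a performance-minus-energy reward. The paper trains a PPO agent against Argo NRM and RAPL; the toy node model, cap values, and reward weights below are assumptions, and tabular value averaging stands in for PPO:

```python
# Simplified sketch of RL-based power capping on an assumed node model.
import random

random.seed(0)
CAPS = [50, 75, 100, 125]  # candidate power caps in watts (assumed)

def node_model(cap):
    """Toy node: heartbeat rate saturates above 100 W; power tracks the cap."""
    perf = min(cap, 100) / 100.0
    return perf, cap

def reward(perf, power):
    return perf - 0.002 * power  # trade performance against energy use

# Epsilon-greedy value estimation over the discrete caps.
q = {c: 0.0 for c in CAPS}
for _ in range(2000):
    cap = random.choice(CAPS) if random.random() < 0.2 else max(q, key=q.get)
    perf, power = node_model(cap)
    q[cap] += 0.1 * (reward(perf, power) - q[cap])

print(max(q, key=q.get))  # 100: the lowest cap that still saturates performance
```

With this reward shape the agent settles on the smallest cap that does not throttle the workload, which is the behavior the paper targets on real hardware.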

    A Framework for Effective Placement of Virtual Machine Replicas for Highly Available Performance-sensitive Cloud-based Applications

    REACTION 2012. 1st International Workshop on Real-time and Distributed Computing in Emerging Applications. December 4th, 2012, San Juan, Puerto Rico. Applications are increasingly being deployed in the cloud due to benefits stemming from economies of scale, scalability, flexibility, and the utility-based pricing model. Although most cloud-based applications have hitherto been enterprise-style, there is a new trend towards hosting performance-sensitive applications in the cloud that demand both high availability and good response times. In the current state of the art in cloud computing research, no solutions exist that provide both high availability and acceptable response times to these applications in a way that also optimizes resource consumption in data centers, which is a key consideration for cloud providers. This paper addresses this dual challenge by presenting the design of a fault-tolerant framework for virtualized data centers that makes two important contributions. First, it describes the architecture of a fault-tolerance framework that can automatically deploy replicas of virtual machines in data centers in a way that optimizes resources while assuring availability and responsiveness. Second, it describes a specific formulation of a replica-deployment combinatorial optimization problem that can be plugged into our strategizable deployment framework. This work was supported in part by the National Science Foundation NSF SHF/CNS Award CNS 0915976 and NSF CAREER CNS 0845789. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation
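
One possible shape of such a replica-placement formulation is a packing problem with an anti-affinity constraint: minimize the hosts used while a VM's replica never lands on its primary's host. The greedy first-fit heuristic, capacity figure, and VM loads below are illustrative assumptions, not the paper's actual optimization model:

```python
# Hedged sketch: first-fit placement of VM primaries and replicas with
# an availability (anti-affinity) constraint.

HOST_CAPACITY = 100  # CPU units per host (assumed)

def place(vms):
    hosts = []       # each host tracked by its current load
    assignment = {}  # "name:role" -> host index
    for name, load in vms:
        for role in ("primary", "replica"):
            placed = False
            for i, h in enumerate(hosts):
                # Anti-affinity: a replica must avoid its primary's host.
                if role == "replica" and assignment[name + ":primary"] == i:
                    continue
                if h["load"] + load <= HOST_CAPACITY:
                    h["load"] += load
                    assignment[name + ":" + role] = i
                    placed = True
                    break
            if not placed:
                hosts.append({"load": load})
                assignment[name + ":" + role] = len(hosts) - 1
    return assignment, len(hosts)

assignment, n_hosts = place([("web", 40), ("db", 40)])
print(n_hosts)  # 2: primaries and replicas interleave across two hosts
```

A combinatorial formulation like the paper's would replace this greedy loop with an exact solver while keeping the same constraints, which is what makes the deployment framework "strategizable".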

    Performance evaluation of the reactor pattern using the OMNeT++ simulator

    The design of large-scale, distributed, performance-sensitive systems presents numerous challenges due to their network-centric nature and stringent quality of service (QoS) requirements. Standardized middleware implementations provide the key building blocks necessary to address these requirements of distributed systems. However, middleware is designed to be applicable to a wide range of domains and applications, which requires system developers to choose the right set of building blocks when designing their systems. To reduce the impact on development costs and time-to-market, decisions about the right set of building blocks must be made as early as possible in system design. This paper addresses this concern by describing a model-driven systems simulation approach to analyze, catch, and rectify incorrect system design decisions at design time. In this paper we focus on model-driven OMNeT++ simulation of the Reactor pattern, which provides event demultiplexing and handling capability. Our experience with modeling the Reactor shows that this approach can be extended to the performance analysis of other pattern-based blocks, and indeed, in the long term, to the entire composed middleware framework
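
The Reactor pattern being simulated can be sketched concretely with Python's standard `selectors` module: one demultiplexing loop waits for ready events and dispatches each to its registered handler. The echo handlers and loop shape below are illustrative, not taken from the paper:

```python
# Minimal Reactor: register handlers, demultiplex with select, dispatch.
import selectors
import socket

sel = selectors.DefaultSelector()

def on_readable(conn):
    """Event handler: echo data back, or clean up on disconnect."""
    data = conn.recv(1024)
    if data:
        conn.sendall(data)
    else:
        sel.unregister(conn)
        conn.close()

def on_accept(sock):
    """Event handler: accept a client and register its read handler."""
    conn, _ = sock.accept()
    sel.register(conn, selectors.EVENT_READ, on_readable)

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, on_accept)

def reactor_loop(iterations):
    """Demultiplex: wait for ready events, dispatch each to its handler."""
    for _ in range(iterations):
        for key, _ in sel.select(timeout=0.1):
            key.data(key.fileobj)  # handler was stored in key.data
```

The OMNeT++ model in the paper analyzes the performance of exactly this dispatch structure (queueing at the demultiplexer, handler service times) without needing the socket machinery.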